
'Turbocharged' artificial synapses could make artificial intelligence 100 times more efficient Genetic Literacy Project

#artificialintelligence

The catch is that neural nets, which are modeled loosely on the structure of the human brain, are typically constructed in software rather than hardware, and the software runs on conventional computer chips. IBM has now shown that building key features of a neural net directly in silicon can make it 100 times more efficient. Chips built this way might turbocharge machine learning in coming years. The IBM chip, like a neural net written in software, mimics the synapses that connect individual neurons in a brain. The strength of these synaptic connections needs to be tuned in order for the network to learn.
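The weight tuning the snippet describes can be illustrated in software. Below is a minimal sketch of a perceptron-style update rule, where each "synaptic" weight is strengthened or weakened in proportion to the prediction error; the function name, learning rate, and toy AND task are illustrative choices, not IBM's actual method or hardware:

```python
def train_synapses(samples, epochs=20, lr=0.1):
    """Tune synaptic weights with a simple perceptron rule:
    nudge each weight so as to reduce the prediction error."""
    n = len(samples[0][0])
    weights = [0.0] * n  # synaptic strengths, initially untuned
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Strengthen or weaken each synapse in proportion to its input
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy task: learn logical AND from four labeled examples
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_synapses(data)
```

In an analog chip these weights would be stored as physical device states rather than numbers in memory, which is where the claimed efficiency gain comes from.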


Intel wants to make artificial intelligence 100 times faster with new class of processors

#artificialintelligence

Machine learning and artificial intelligence are considered by many to be radical tools that will revolutionize entire industries. Everything from self-driving cars to photo apps, Netflix recommendations, and digital assistants is driven by this technology, and we'll become even more reliant on it in the future. That's why Intel is looking to capitalize on the trend and has created its own AI-optimized chip. Most neural networks, machine-learning algorithms, and pretty much everything else we'd describe as artificial intelligence currently run on graphics cards. Both the learning and a big part of the implementation are driven by GPUs, which have proven remarkably adept at processing such data.